try-runtime::follow-chain - keep connection #12167
Conversation
It seems that this PR requires some companion PR, but I will need some assistance with it :|
Basically, this PR breaks the builds of https://github.com/paritytech/polkadot/ and https://github.com/paritytech/cumulus/, so the two must be updated to match the changes. Once the corresponding changes are made, the companion PRs must be indicated in the PR description, as stated here: https://github.com/paritytech/substrate/blob/master/docs/CONTRIBUTING.adoc#updating-polkadot-as-well. Chances are, updating Polkadot will be enough to fix the Cumulus build. It's not mentioned anywhere, but the Cumulus companion must be indicated in the Polkadot PR too (if Cumulus changes are required).
Looks good to me.
All the code looks good and makes sense to me, but we already have something similar. Maybe it's not that obvious, but you can actually pass in your own RPC client already in the `Builder::OnlineConfig`, example here. Maybe I am missing something about why the `RpcService` is needed? Then we wouldn't need this new `RpcService`, and the RPC methods that are manually implemented could just be moved to the `RpcApi` trait (it's implemented for the RPC client by the jsonrpsee macro), but then you would need to import that trait in the try-runtime CLI.
As a follow-up to this PR, it would be great if we could use the existing RPC API from `sc-rpc-api` instead of duplicating these RPC methods all over the place, but it may cause some extra deps; see #12030.
The `RpcService` that you wrote is a bit cleaner, but maybe we could merge `RpcService` and `Builder::Online`, or something?
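For reference, a minimal sketch of the "bring your own RPC client" idea mentioned above: build one jsonrpsee websocket client up front and hand it to whatever needs it, instead of letting every call open a fresh connection. The commented-out remote-externalities wiring is hypothetical; the field and conversion names there are assumptions for illustration, not the exact API of this revision.

```rust
// Sketch only. Assumes the `jsonrpsee` crate with the `ws-client` feature.
use std::sync::Arc;

use jsonrpsee::ws_client::{WsClient, WsClientBuilder};

async fn shared_ws_client(uri: &str) -> Result<Arc<WsClient>, Box<dyn std::error::Error>> {
    // Connect once; callers clone the `Arc` instead of reconnecting.
    Ok(Arc::new(WsClientBuilder::default().build(uri).await?))
}

// Hypothetical wiring into remote-externalities (names illustrative only):
// let ext = Builder::<Block>::new()
//     .mode(Mode::Online(OnlineConfig {
//         transport: shared_ws_client("ws://127.0.0.1:9944").await?.into(),
//         ..Default::default()
//     }))
//     .build()
//     .await?;
```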
@niklasad1 From what I can see now, there are a couple of things to consider / fix. I would propose a three-step solution. Wdyt?
Sounds good, I got a bit confused by having the …
…/keep-connection branch updated from 507d0a3 to b903d97 (conflicts in utils/frame/try-runtime/cli/src/commands/follow_chain.rs resolved).
That's because of the recent changes merged in Substrate/Cumulus. Merging …
bot merge
@pmikolajczyk41 polkadot address?
@kianenigma added, thank you!
/tip medium
@kianenigma A medium tip was successfully submitted for pmikolajczyk41 (15fdtPm7jNN9VZk5LokKcmBVfmYZkCXdjPpaPVL2w7WgCgRY on polkadot). https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.polkadot.io#/treasury/tips
* Refactor RPC module
* Add flag to `follow-chain`
* Multithreading remark
* fmt
* O_O
* unused import
* cmon
* accidental removal reverted
* remove RpcHeaderProvider
* mut refs
* fmt
* no mutability
* now?
* now?
* arc mutex
* async mutex
* async mutex
* uhm
* connect in constructor
* remove dep
* old import
* another take
* trigger polkadot pipeline
* trigger pipeline
Problem statement

Currently, in `try-runtime::follow-chain`, we have to reconnect to the node for each block. When the block time is long, this obviously makes sense. However, for a short block time (like we have in Aleph, and possibly in Polkadot) it would probably be better to keep a single connection for the whole interaction.
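To make the difference concrete, here is a minimal jsonrpsee sketch (not this PR's code) contrasting the two patterns. The function names are illustrative, and the snippet assumes the `jsonrpsee` ws-client plus `anyhow` and `serde_json` for brevity.

```rust
use jsonrpsee::{core::client::ClientT, rpc_params, ws_client::WsClientBuilder};

// Reconnecting pattern: a new websocket handshake for every block.
async fn header_with_reconnect(uri: &str, hash: &str) -> anyhow::Result<serde_json::Value> {
    let client = WsClientBuilder::default().build(uri).await?; // fresh connection each call
    Ok(client.request("chain_getHeader", rpc_params![hash]).await?)
}

// Reused-connection pattern: the caller builds the client once and
// passes it in for every block.
async fn header_with_shared_client(
    client: &jsonrpsee::ws_client::WsClient,
    hash: &str,
) -> anyhow::Result<serde_json::Value> {
    Ok(client.request("chain_getHeader", rpc_params![hash]).await?)
}
```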
Approach

We introduce `RpcService`, which wraps the RPC requests provided by the `remote_externalities` crate. It provides the same functionality, but in addition it can reuse the same client (connection) for every request. Additionally, we add a new flag to `try-runtime::follow-chain`, `keep_connection`, with which we can specify whether the connection should be reused for fetching blocks. When substituting the new API, we preserved the old behavior (i.e. reconnecting) in all places.
Notes

One could argue that, because of #8988, we shouldn't refactor the `remote_externalities::rpc_api` module, but rather take steps toward reusing the RPC utilities from another crate. However, this would go far beyond the scope of the main task (reusing the connection in `follow-chain`). Furthermore, we argue that the refactor made in this PR should make future consolidation easier.

This resolves one of the ideas raised in #12089.
cc: @kianenigma
polkadot companion: paritytech/polkadot#5968
Polkadot address: 15fdtPm7jNN9VZk5LokKcmBVfmYZkCXdjPpaPVL2w7WgCgRY